4 results
2023 Conference article Open Access
Score vs. winrate in score-based games: which reward for reinforcement learning?
Pasqualini L., Parton M., Morandin F., Amato G., Gini R., Metta C., Fantozzi M., Marchetti A.
In the last years, DeepMind's AlphaZero algorithm has become the state of the art for efficiently tackling perfect-information two-player zero-sum games with a win/lose outcome. However, when the win/lose outcome is decided by a final score difference, AlphaZero may play score-suboptimal moves, because all winning final positions are equivalent from the win/lose perspective. This can be an issue, for instance, when the agent is used for teaching, or when trying to understand whether there is a better move. Moreover, there is the theoretical quest for the perfect game. A naive approach would be to train an AlphaZero-like agent to predict score differences instead of win/lose outcomes. Since the game of Go is deterministic, this should also produce outcome-optimal play. However, it is a folklore belief that "this does not work". In this paper we first provide empirical evidence for this belief. We then give a theoretical interpretation of this suboptimality in a general perfect-information two-player zero-sum game where the complexity of a game like Go is replaced by randomness of the environment. We show that an outcome-optimal policy has a different preference for uncertainty when it is winning or losing. In particular, when in a losing state, an outcome-optimal agent chooses actions leading to a higher variance of the score. We then posit that, when approximation is involved, a deterministic game behaves like a nondeterministic game, where the score variance is modeled by how uncertain the position is. We validate this hypothesis in an AlphaZero-like software with a human expert.
Source: 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 573–578, Nassau, Bahamas, 12-14/12/2022
DOI: 10.1109/icmla55696.2022.00099
DOI: 10.48550/arxiv.2201.13176
See at: arXiv.org e-Print Archive Open Access | ISTI Repository Open Access | doi.org Restricted | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
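The abstract's key claim, namely that an outcome-optimal agent should prefer higher score variance when it is behind, can be checked with a small numerical sketch. The Gaussian score model and the specific means and standard deviations below are illustrative assumptions and are not taken from the paper.

```python
# Minimal worked example (not from the paper): two candidate moves with the
# same expected final score difference (-2, i.e. a losing position) but
# different variance. Win probability is P(score difference > 0).
import math

def win_probability(mean: float, std: float) -> float:
    """P(X > 0) for X ~ Normal(mean, std^2)."""
    return 0.5 * math.erfc(-mean / (std * math.sqrt(2.0)))

moves = {
    "low-variance move": (-2.0, 1.0),
    "high-variance move": (-2.0, 3.0),
}
for name, (mean, std) in moves.items():
    print(f"{name}: expected score {mean:+.1f}, "
          f"win probability {win_probability(mean, std):.3f}")

# low-variance move:  win probability ~0.023
# high-variance move: win probability ~0.253
# A score-difference-maximizing agent is indifferent between the two moves,
# while a win/lose-optimal agent prefers the riskier one when behind.
```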


2023 Journal article Open Access
Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning
Metta C., Beretta A., Guidotti R., Yin Y., Gallinari P., Rinzivillo S., Giannotti F.
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, XAI approaches are often tested only on generalist classifiers and do not represent realistic problems such as those of medical diagnosis. In this paper, we aim to improve the trust and confidence of users in automatic AI decision systems in the field of medical skin lesion diagnosis by customizing an existing XAI approach to explain an AI model able to recognize different types of skin lesions. The explanation is generated through the use of synthetic exemplar and counter-exemplar images of skin lesions, and our contribution offers the practitioner a way to highlight the crucial traits responsible for the classification decision. A validation survey with domain experts, beginners, and unskilled people shows that the use of explanations improves trust and confidence in the automatic decision system. Moreover, an analysis of the latent space adopted by the explainer unveils that some of the most frequent skin lesion classes are distinctly separated. This phenomenon may stem from the intrinsic characteristics of each class and may help resolve common misclassifications made by human experts.
Source: International Journal of Data Science and Analytics (Print) (2023). doi:10.1007/s41060-023-00401-z
DOI: 10.1007/s41060-023-00401-z
Project(s): TAILOR via OpenAIRE, HumanE-AI-Net via OpenAIRE, XAI via OpenAIRE, SoBigData-PlusPlus via OpenAIRE
See at: International Journal of Data Science and Analytics Open Access | link.springer.com Open Access | ISTI Repository Open Access | CNR ExploRA


2022 Conference article Open Access
Exemplars and counterexemplars explanations for skin lesion classifiers
Metta C., Guidotti R., Yin Y., Gallinari P., Rinzivillo S.
Explainable AI consists in developing models that allow interaction between decision systems and humans by making the decisions understandable. We propose a case study for skin lesion diagnosis showing how it is possible to provide explanations of the decisions of a deep neural network trained to label skin lesions.
Source: HHAI2022 - Augmenting Human Intellect, pp. 258–260, Amsterdam, The Netherlands, 13-17/07/2022
DOI: 10.3233/faia220209
Project(s): HumanE-AI-Net via OpenAIRE
See at: ebooks.iospress.nl Open Access | ISTI Repository Open Access | CNR ExploRA


2021 Conference article Open Access
Exemplars and counterexemplars explanations for image classifiers, targeting skin lesion labeling
Metta C., Guidotti R., Yin Y., Gallinari P., Rinzivillo S.
Explainable AI consists in developing mechanisms that allow interaction between decision systems and humans by making the decisions of the former understandable. This is particularly important in sensitive contexts like the medical domain. We propose a use-case study for skin lesion diagnosis, illustrating how it is possible to provide the practitioner with explanations of the decisions of a state-of-the-art deep neural network classifier trained to characterize skin lesions from examples. Our framework consists of a trained classifier onto which an explanation module operates. The latter is able to offer the practitioner exemplars and counter-exemplars for the classification diagnosis, thus allowing the physician to interact with the automatic diagnosis system. The exemplars are generated via an adversarial autoencoder. We illustrate the behavior of the system on representative examples.
Source: ISCC 2021 - IEEE Symposium on Computers and Communications, Athens, Greece, 5-8/09/2021
DOI: 10.1109/iscc53001.2021.9631485
Project(s): AI4EU via OpenAIRE, TAILOR via OpenAIRE, HumanE-AI-Net via OpenAIRE
See at: ISTI Repository Open Access | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
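The three skin-lesion records above share one explanation mechanism: synthetic exemplars and counter-exemplars generated in the latent space of an adversarial autoencoder around the instance to explain. The sketch below illustrates only that general mechanism, under loud assumptions: PCA over synthetic tabular data stands in for the adversarial autoencoder trained on skin-lesion images, a random forest stands in for the black-box classifier, and the explain helper with its sampling parameters is hypothetical, not the implementation used in the papers.

```python
# Hypothetical sketch of exemplar / counter-exemplar generation in latent space.
# Stand-ins: PCA for the adversarial autoencoder, a random forest for the
# black-box image classifier, synthetic tabular data for skin-lesion images.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)  # classifier to explain
autoencoder = PCA(n_components=5).fit(X)                      # stand-in encoder/decoder

def explain(instance, n_samples=200, scale=0.5):
    """Sample latent neighbours of `instance`, decode them, and split them by
    whether the black box keeps or changes its prediction."""
    target = black_box.predict(instance.reshape(1, -1))[0]
    z = autoencoder.transform(instance.reshape(1, -1))
    neighbours = z + rng.normal(scale=scale, size=(n_samples, z.shape[1]))
    decoded = autoencoder.inverse_transform(neighbours)
    labels = black_box.predict(decoded)
    exemplars = decoded[labels == target]          # same decision as the original
    counter_exemplars = decoded[labels != target]  # decision flips
    return exemplars, counter_exemplars

ex, cex = explain(X[0])
print(f"{len(ex)} exemplars, {len(cex)} counter-exemplars")
```

In the papers the decoded neighbours are synthetic lesion images shown to the practitioner, so the same split into "classified like the original" and "classified differently" is what yields the exemplar and counter-exemplar images discussed in the abstracts.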